How does empathy influence creative problem solving? We introduce a computational empathy intervention based on context-specific affective mimicry and perspective taking by a virtual agent appearing in the form of a friendly polar bear. In an online experiment with 1,006 participants randomly assigned to an emotion-elicitation intervention (with a control elicitation condition and an anger elicitation condition) and a computational empathy intervention (with a control virtual agent and an empathic virtual agent), we examine the effects of anger and empathy on participants' performance in a Wordle-based word game. We find that participants assigned to the anger elicitation condition perform significantly worse on multiple performance metrics than participants assigned to the control condition. However, we find that the empathic virtual agent counteracts the performance decrement induced by the anger condition, such that participants assigned to the empathic virtual agent and the anger condition perform no differently from participants in the control elicitation condition, and significantly better than participants assigned to the control virtual agent and the anger elicitation condition. While empathy reduces the negative effects of anger, we find no evidence that the empathic virtual agent influences the performance of participants assigned to the control elicitation condition. By introducing a framework for computational empathy interventions and conducting a two-by-two factorial design randomized experiment, we provide rigorous empirical evidence that computational empathy can counteract the negative effects of anger on creative problem solving.
While artificial intelligence (AI) holds promise for supporting healthcare providers and improving the accuracy of medical diagnoses, a lack of transparency in the composition of datasets exposes AI models to the possibility of unintentional and avoidable mistakes. In particular, public image datasets of dermatological conditions rarely include information about skin color. As a start towards increasing transparency, AI researchers have appropriated the Fitzpatrick skin type (FST) from a measure of patient photosensitivity to a measure for estimating skin tone in algorithmic audits of computer vision applications, including facial recognition and dermatology diagnosis. To understand the variability of estimated FST annotations on images, we compare multiple FST annotation methods on 460 images of skin conditions from textbooks and online dermatology atlases. We find that the inter-rater reliability between three board-certified dermatologists is comparable to the inter-rater reliability between the board-certified dermatologists and two crowdsourcing methods. In contrast, we find that the Individual Typology Angle converted to FST (ITA-FST) method produces annotations that are significantly less correlated with the experts' annotations than the experts' annotations are correlated with each other. These results demonstrate that algorithms based on ITA-FST are not reliable for annotating large-scale image datasets, but that human-centered, crowd-based protocols can reliably add skin type transparency to dermatology datasets. Furthermore, we introduce the concept of dynamic consensus protocols with tunable parameters, including expert review, that increase the visibility of crowdwork and provide guidance for future crowdsourced annotations of large image datasets.
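As a concrete illustration of the ITA-FST conversion that the abstract critiques, here is a minimal sketch. The ITA formula from CIELAB coordinates is standard, but the category thresholds below follow one commonly cited convention and are an assumption of this sketch, not values taken from this work:

```python
import math

def ita_degrees(L_star: float, b_star: float) -> float:
    """Individual Typology Angle (ITA) in degrees from CIELAB L* and b*."""
    return math.degrees(math.atan2(L_star - 50.0, b_star))

# One commonly cited ITA -> skin-type category convention (assumed thresholds).
ITA_BINS = [
    (55.0, "I (very light)"),
    (41.0, "II (light)"),
    (28.0, "III (intermediate)"),
    (10.0, "IV (tan)"),
    (-30.0, "V (brown)"),
]

def ita_to_fst(ita: float) -> str:
    """Map an ITA value to a Fitzpatrick-like category label."""
    for threshold, label in ITA_BINS:
        if ita > threshold:
            return label
    return "VI (dark)"
```

The abstract's point is that this purely colorimetric pipeline correlates poorly with expert FST annotations, however the thresholds are tuned.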
Across a variety of domains, there is a performance gap between machine learning models' accuracy on dataset benchmarks and on real-world production data. Despite the careful design of static dataset benchmarks to be representative of the real world, models often err when production data diverge from the data on which the models were trained. We can directly measure and adjust for some aspects of distribution shift, but we cannot address sample selection bias, adversarial perturbations, and non-stationarity without knowing the data generation process. In this paper, we outline two methods for identifying changes in context that lead to distribution shifts and model prediction errors: leveraging human intuition and expert knowledge to identify first-order contexts, and developing dynamic benchmarks based on desiderata for the data generation process. Furthermore, we present two case studies to highlight the implicit assumptions underlying applied machine learning models, which tend to cause errors when attempting to generalize beyond test benchmark datasets. By paying close attention to the role of context in each prediction task, researchers can reduce context-shift errors and increase generalization performance.
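As a sketch of the "directly measurable" aspects of distribution shift mentioned above, the following hypothetical helper computes a standardized mean difference for one feature between a training sample and a production sample. The function name and any alerting threshold one might apply to it are illustrative assumptions, not part of this paper's method:

```python
import statistics

def standardized_mean_difference(train: list, prod: list) -> float:
    """First-pass covariate-shift check for a single feature:
    absolute mean difference scaled by the pooled standard deviation."""
    mean_t, mean_p = statistics.fmean(train), statistics.fmean(prod)
    var_t, var_p = statistics.variance(train), statistics.variance(prod)
    pooled_sd = ((var_t + var_p) / 2.0) ** 0.5
    return abs(mean_t - mean_p) / pooled_sd if pooled_sd > 0 else 0.0
```

Checks like this catch simple covariate shift; the abstract's point is that sample selection bias, adversarial perturbations, and non-stationarity are not detectable this way without knowledge of the data generation process.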
Recent advances in technology for hyper-realistic visual effects have provoked concerns that deepfake videos of political speeches will soon be visually indistinguishable from authentic video recordings. Conventional wisdom in communication research predicts that people will fall for fake news more often when the same version of a story is presented as a video rather than as text. Here, we evaluate how 41,822 participants distinguish real political speeches from fabrications in an experiment in which speeches were randomly presented in permutations of text, audio, and video. We find that access to audio and visual communication modalities improves participants' accuracy. Here, human judgment relies more on how something is said, the audio-visual cues, than on the speech content itself. However, we find that reflective reasoning moderates the degree to which participants consider visual information: lower performance on the Cognitive Reflection Test is associated with an over-reliance on what is said.
Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding the computation in certain group algebras. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs). We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to $\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs, and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k \le 5$, construct puzzles of small width that are larger than those in previous work, and improve the upper bounds on strong USP size for $k \le 12$. Although our work only deals with puzzles of small-constant width, the strong USPs we find imply matrix multiplication algorithms that run in $O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not beat the fastest algorithms, our work provides evidence and, perhaps, a path to finding families of strong USPs that imply matrix multiplication algorithms that are more efficient than those currently known.
Agile robotics presents a difficult challenge with robots moving at high speeds requiring precise and low-latency sensing and control. Creating agile motion that accomplishes the task at hand while being safe to execute is a key requirement for agile robots to gain human trust. This requires designing new approaches that are flexible and maintain knowledge over world constraints. In this paper, we consider the problem of building a flexible and adaptive controller for a challenging agile mobile manipulation task of hitting ground strokes on a wheelchair tennis robot. We propose and evaluate an extension to work done on learning striking behaviors using a probabilistic movement primitive (ProMP) framework by (1) demonstrating the safe execution of learned primitives on an agile mobile manipulator setup, and (2) proposing an online primitive refinement procedure that utilizes evaluative feedback from humans on the executed trajectories.
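A minimal sketch of the probabilistic movement primitive (ProMP) idea referenced above: a 1-D trajectory is represented as a Gaussian distribution over the weights of normalized radial basis functions, and striking motions are sampled by drawing a weight vector and rolling it out over time. All names, basis counts, and parameter values here are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def rbf_basis(t: float, n_basis: int = 8, width: float = 0.05) -> list:
    """Normalized RBF activations at phase t in [0, 1]."""
    centers = [i / (n_basis - 1) for i in range(n_basis)]
    phi = [math.exp(-((t - c) ** 2) / (2.0 * width)) for c in centers]
    total = sum(phi)
    return [p / total for p in phi]

def sample_trajectory(mean_w, std_w, n_steps: int = 50, rng=random):
    """Draw one weight vector w ~ N(mean_w, diag(std_w^2)) and roll it out."""
    w = [rng.gauss(m, s) for m, s in zip(mean_w, std_w)]
    return [
        sum(p * wi for p, wi in zip(rbf_basis(k / (n_steps - 1)), w))
        for k in range(n_steps)
    ]
```

In a ProMP framework, the weight distribution is learned from demonstrations, and the online refinement procedure the abstract proposes would shift this distribution using human evaluative feedback on executed trajectories.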
Curating datasets for object segmentation is a difficult task. With the advent of large-scale pre-trained generative models, conditional image generation has been given a significant boost in result quality and ease of use. In this paper, we present a novel method that enables the generation of general foreground-background segmentation models from simple textual descriptions, without requiring segmentation labels. We leverage and explore pre-trained latent diffusion models, to automatically generate weak segmentation masks for concepts and objects. The masks are then used to fine-tune the diffusion model on an inpainting task, which enables fine-grained removal of the object, while at the same time providing a synthetic foreground and background dataset. We demonstrate that using this method beats previous methods in both discriminative and generative performance and closes the gap with fully supervised training while requiring no pixel-wise object labels. We show results on the task of segmenting four different objects (humans, dogs, cars, birds).
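A toy sketch of how a weak segmentation mask might be binarized from a per-pixel relevance map (for instance, an aggregated cross-attention map for the concept token in a latent diffusion model). The quantile-based cutoff is an illustrative assumption, not the paper's actual procedure:

```python
def weak_mask(scores, quantile: float = 0.7):
    """Binarize a 2-D relevance map, keeping roughly the top
    (1 - quantile) fraction of pixels as foreground."""
    flat = sorted(v for row in scores for v in row)
    cutoff = flat[int(quantile * (len(flat) - 1))]
    return [[1 if v >= cutoff else 0 for v in row] for row in scores]
```

A mask produced this way is noisy, which is why the method then fine-tunes the diffusion model on an inpainting task to obtain cleaner synthetic foreground/background pairs.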
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.